1.
Sci Rep ; 13(1): 14679, 2023 09 06.
Article in English | MEDLINE | ID: mdl-37674052

ABSTRACT

Despite the wide range of uses of rabbits (Oryctolagus cuniculus) as experimental models for pain, as well as their increasing popularity as pets, pain assessment in rabbits is understudied. This study is the first to address automated detection of acute postoperative pain in rabbits. Using a dataset of video footage of n = 28 rabbits before (no pain) and after surgery (pain), we present an AI model for pain recognition that uses both the facial area and the body posture, reaching accuracy above 87%. We combine 1 s interval sampling with Grayscale Short-Term stacking (GrayST) to incorporate temporal information for frame-level video classification, together with a frame selection technique that better exploits the available video data.
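The GrayST idea described in the abstract can be sketched as follows: three consecutive grayscale frames are stacked into the three channels of a single image, so that a standard 2D network sees short-term motion. This is a minimal NumPy sketch, not the paper's implementation; the 30 fps rate, array shapes, and luma weights are illustrative assumptions.

```python
import numpy as np

def grayst_stack(frames: np.ndarray) -> np.ndarray:
    """Stack three consecutive grayscale frames into one 3-channel image.

    frames: array of shape (3, H, W, 3) holding consecutive RGB frames.
    Returns an (H, W, 3) image whose channels are the grayscale versions
    of frames t-1, t, t+1, so a 2D CNN sees short-term temporal context.
    """
    if frames.shape[0] != 3:
        raise ValueError("GrayST needs exactly three consecutive frames")
    # ITU-R BT.601 luma weights for RGB -> grayscale conversion
    weights = np.array([0.299, 0.587, 0.114])
    gray = frames @ weights               # (3, H, W)
    return np.stack(gray, axis=-1)        # (H, W, 3)

# Sample one frame per second (assuming 30 fps), then stack a triple
video = np.random.rand(90, 64, 64, 3)     # 90 frames, e.g. 3 s of video
samples = video[::30]                     # frames at 1 s intervals
stacked = grayst_stack(samples[:3])
print(stacked.shape)                      # (64, 64, 3)
```

The stacked image can then be fed to any off-the-shelf 2D image classifier in place of a single RGB frame.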


Subjects
Communications Media , Deep Learning , Lagomorpha , Animals , Rabbits , Pain, Postoperative , Face
2.
Sci Rep ; 13(1): 8973, 2023 06 02.
Article in English | MEDLINE | ID: mdl-37268666

ABSTRACT

Manual tools for pain assessment from facial expressions have been suggested and validated for several animal species. However, facial expression analysis performed by humans is prone to subjectivity and bias, and in many cases also requires special expertise and training. This has led to an increasing body of work on automated pain recognition, which has been addressed for several species, including cats. Even for experts, cats are a notoriously challenging species for pain assessment. A previous study compared two approaches to automated 'pain'/'no pain' classification from cat facial images: a deep learning approach, and an approach based on manually annotated geometric landmarks, reaching comparable accuracy results. However, that study used a very homogeneous dataset of cats, so further research is required to study the generalizability of pain recognition in more realistic settings. This study addresses the question of whether AI models can classify 'pain'/'no pain' in cats in a more realistic (multi-breed, multi-sex) setting using a more heterogeneous and thus potentially 'noisy' dataset of 84 client-owned cats. The cats were a convenience sample presented to the Department of Small Animal Medicine and Surgery of the University of Veterinary Medicine Hannover and included individuals of different breeds, ages, and sexes, with varying medical conditions and histories. Cats were scored by veterinary experts using the Glasgow composite measure pain scale in combination with the well-documented and comprehensive clinical history of those patients; the scoring was then used to train AI models using two different approaches. We show that in this context the landmark-based approach performs better, reaching accuracy above 77% in pain detection, as opposed to only above 65% reached by the deep learning approach.
Furthermore, we investigated the explainability of such machine recognition by identifying the facial features that are important to the machine. The region of the nose and mouth appears more important for machine pain classification, while the region of the ears is less important; these findings were consistent across the models and techniques studied here.
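As a minimal sketch of how a landmark-based approach can turn annotated facial geometry into classifier inputs, one common choice is normalised pairwise distances between landmarks; the 48-point scheme below is an illustrative assumption, not the landmark set used in the study.

```python
import numpy as np
from itertools import combinations

def landmark_features(landmarks: np.ndarray) -> np.ndarray:
    """Turn (x, y) facial landmarks into scale-invariant pairwise distances.

    landmarks: (n_points, 2) array of annotated facial landmark coordinates.
    Distances are normalised by the largest one, so the size of the face
    in the image does not drive the downstream classifier.
    """
    dists = np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                      for i, j in combinations(range(len(landmarks)), 2)])
    return dists / dists.max()

# Hypothetical 48-landmark annotation for one cat face
rng = np.random.default_rng(0)
face = rng.random((48, 2))
features = landmark_features(face)
print(features.shape)   # (1128,): 48 choose 2 pairwise distances
```

Such a feature vector can then be passed to any standard classifier (e.g. a random forest or a small neural network) trained on the expert pain scores.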


Subjects
Face , Pain , Humans , Cats , Animals , Pain/diagnosis , Pain/veterinary , Nose , Facial Expression , Pain Measurement/methods
3.
Sci Rep ; 12(1): 22611, 2022 12 30.
Article in English | MEDLINE | ID: mdl-36585439

ABSTRACT

In animal research, automation of affective state recognition has so far mainly addressed pain in a few species. Emotional states remain uncharted territory, especially in dogs, due to the complexity of their facial morphology and expressions. This study helps fill this gap in two ways. First, it is the first to address dog emotional states using a dataset obtained in a controlled experimental setting, including videos from (n = 29) Labrador Retrievers assumed to be in two experimentally induced emotional states: negative (frustration) and positive (anticipation). The dogs' facial expressions were measured using the Dogs Facial Action Coding System (DogFACS). Two different approaches are compared: (1) a DogFACS-based approach with a two-step pipeline consisting of (i) a DogFACS variable detector and (ii) a positive/negative state Decision Tree classifier; (2) an approach using deep learning techniques with no intermediate representation. The approaches reach accuracy above 71% and 89%, respectively, with the deep learning approach performing better. Second, this study is also the first to study the explainability of AI models in the context of emotion in animals. The DogFACS-based approach provides decision trees, an interpretable mathematical representation that reflects previous findings by human experts relating certain facial expressions (DogFACS variables) to specific emotional states. The deep learning approach offers a different, visual form of explainability in the form of heatmaps reflecting the regions on which the network focuses its attention, which in some cases are clearly related to the nature of particular DogFACS variables. These heatmaps may hold the key to novel insights on the sensitivity of the network to nuanced pixel patterns reflecting information invisible to the human eye.
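Step (ii) of the DogFACS-based pipeline can be sketched as a decision tree over the binary DogFACS variable detections produced by step (i). The variable names and splits below are hypothetical illustrations, not the tree learned in the study.

```python
def classify_state(facs: dict) -> str:
    """Toy positive/negative decision tree over DogFACS variable flags.

    facs: dict mapping DogFACS variable names to booleans, as a detector
    (step i of the pipeline) might emit them. The splits are illustrative.
    """
    if facs.get("ears_flattener"):              # hypothetical split
        return "negative"                       # frustration-like
    if facs.get("mouth_open") and facs.get("tongue_show"):
        return "positive"                       # anticipation-like
    return "negative"                           # default leaf

print(classify_state({"mouth_open": True, "tongue_show": True}))  # positive
print(classify_state({"ears_flattener": True}))                   # negative
```

The appeal of this representation, as the abstract notes, is that each path through the tree can be read off and compared against expert knowledge about which facial actions correlate with which states.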


Subjects
Facial Recognition , Frustration , Animals , Dogs , Humans , Facial Expression , Emotions , Attention , Recognition, Psychology
4.
Appl Anim Behav Sci ; 250: 105614, 2022 May.
Article in English | MEDLINE | ID: mdl-36540855

ABSTRACT

Animal shelters have been found to be stressful environments for pet dogs, affecting both their behavior and their welfare. The COVID-19 pandemic brought new uncertainties to animal sheltering practices which may affect shelter dog behavior in unexpected ways. To evaluate this, we analyzed changes in dog activity levels before and during COVID-19 using automated video analysis within a large, open-admission animal shelter in New York City, USA. Shelter dog activity was analyzed during two two-week-long time periods: (i) just before COVID-19 safety measures were put in place (Feb 26-Mar 17, 2020) and (ii) during the COVID-19 quarantine (July 10-23, 2020). During these two periods, video clips averaging 15.3 seconds were taken of participating kennels every hour from approximately 8 am to 8 pm. Using a two-step filtering approach, a matched sample (based on the number of days of observation) of 34 dogs was defined, consisting of 17 dogs in each group (N1/N2 = 17). An automated video analysis of active/non-active behaviors was conducted and compared to manual coding of activity. The automated analysis was validated against the manual coding, reaching above 79% accuracy. Significant differences in the patterns of shelter dog activity were observed: less activity was observed in the afternoons before COVID-19 restrictions, while during COVID-19, activity remained at a constant average. Together, these findings suggest that (1) the COVID-19 lockdown altered shelter dogs' in-kennel activity, likely due to changes in the shelter environment, and (2) automated analysis can be used as a hands-off tool to monitor activity.
While this method of analysis presents immense opportunity for future research, we discuss the limitations of automated analysis and guidelines, in the context of shelter dogs, that can increase detection accuracy, and we reflect on policy changes that might help mediate canine stress in changing shelter environments.
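A minimal stand-in for automated active/non-active classification of a kennel clip is simple inter-frame differencing; the threshold and clip shape below are illustrative assumptions, and the study's actual method is not specified in this abstract.

```python
import numpy as np

def is_active(clip: np.ndarray, threshold: float = 0.02) -> bool:
    """Label a short kennel clip 'active' if the mean per-pixel change
    between consecutive frames exceeds a threshold.

    clip: (n_frames, H, W) grayscale video with values in [0, 1].
    """
    diffs = np.abs(np.diff(clip, axis=0))   # change between frame pairs
    return float(diffs.mean()) > threshold

still = np.zeros((10, 32, 32))
moving = np.zeros((10, 32, 32))
moving[1::2, 8:24, 8:24] = 1.0              # a blinking square = motion
print(is_active(still), is_active(moving))  # False True
```

Applying such a labeller to every hourly clip yields the kind of per-dog activity time series that the two study periods could then be compared on.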

5.
Sci Rep ; 12(1): 9575, 2022 06 10.
Article in English | MEDLINE | ID: mdl-35688852

ABSTRACT

Facial expressions in non-human animals are closely linked to their internal affective states, with the majority of empirical work focusing on facial shape changes associated with pain. However, existing tools for facial expression analysis are prone to human subjectivity and bias, and in many cases also require special expertise and training. This paper presents the first comparative study of two different paths towards automating pain recognition in facial images of domestic short-haired cats (n = 29), captured during ovariohysterectomy at different time points corresponding to varying intensities of pain. One approach is based on convolutional neural networks (ResNet50), while the other is based on machine learning models using geometric landmark analysis inspired by species-specific Facial Action Coding Systems (i.e. CatFACS). Both types of approaches reach comparable accuracy above 72%, indicating their potential usefulness as a basis for automating cat pain detection from images.


Subjects
Facial Expression , Facial Recognition , Animals , Cats , Emotions , Face , Humans , Pain/veterinary , Recognition, Psychology